687 research outputs found

    The Pass-Through of RIN Prices to Wholesale and Retail Fuels under the Renewable Fuel Standard

    The US Renewable Fuel Standard (RFS) requires blending increasing quantities of biofuels into the surface vehicle fuel supply. The RFS requirements are met through a system of tradable permits called Renewable (fuel) Identification Numbers, or RINs. We exploit the large fluctuations in RIN prices during 2013–15 to estimate the pass-through of RIN prices to US wholesale and retail fuel prices. We control for common factors by examining spreads of physically similar fuels with different RIN obligations. Pooling six different wholesale petroleum fuel spreads, we estimate a pooled long-run or equilibrium pass-through coefficient of 1.00 with a standard error of 0.11. This pass-through occurs within two business days. The only fuel for which we find an economically and statistically significant failure of pass-through is retail E85, which contains up to 83% ethanol; the pass-through of RIN prices to the retail E85–E10 spread is precisely estimated to be close to zero. Keywords: E85; Energy prices; Fuel markets; RBOB; Retail fuel spreads; Wholesale fuel spreads.
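The paper's headline estimate, a long-run pass-through coefficient of 1.00, can be illustrated with a toy regression. A minimal sketch using synthetic data with a true pass-through of 1.0; none of the numbers below come from the authors' dataset:

```python
# Illustrative sketch (not the authors' code): estimating a long-run
# pass-through coefficient by regressing a wholesale fuel-price spread
# on the RIN price, using synthetic data with a true coefficient of 1.0.
import numpy as np

rng = np.random.default_rng(0)
rin_price = rng.uniform(0.2, 1.4, size=500)          # $/RIN, synthetic
spread = 0.05 + 1.0 * rin_price + rng.normal(0, 0.02, size=500)

# OLS: spread_t = alpha + beta * RIN_t + e_t; beta is the pass-through.
X = np.column_stack([np.ones_like(rin_price), rin_price])
(alpha, beta), *_ = np.linalg.lstsq(X, spread, rcond=None)

print(round(beta, 2))  # close to the full pass-through of 1.0
```

The paper's actual design additionally pools several spreads and estimates dynamics (pass-through within two business days), which this single-equation sketch omits.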

    Blacklisting Malicious Websites using Peer-to-Peer Technology

    The misuse of websites to serve exploit code that compromises hosts on the Internet has increased drastically in recent years. With new methods such as Fast-Flux or Domain-Fluxing, attackers have found ways to generate thousands of links leading to malicious web servers in a very short time. The distributed blacklist solution we propose in this paper allows us to respond quickly to new threats and to draw on different sources of information about malicious websites. By sharing attack information globally, it is therefore possible to protect networks even from threats that have not yet targeted them.
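The core idea, protecting a network from domains it has never itself been attacked from by merging peer reports, can be sketched as follows. All names and parameters here are illustrative, not taken from the paper:

```python
# Hypothetical sketch of merging blacklist reports from multiple peers,
# expiring stale entries and optionally requiring confirmation from
# several independent peers before blocking a domain.
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    domain: str
    peer: str
    seen_at: int  # unix timestamp of last observed malicious activity

def merge_blacklists(reports, now, max_age=86_400, min_peers=1):
    """Return domains reported recently enough, optionally requiring
    confirmation from several independent peers."""
    fresh = [r for r in reports if now - r.seen_at <= max_age]
    by_domain = {}
    for r in fresh:
        by_domain.setdefault(r.domain, set()).add(r.peer)
    return {d for d, peers in by_domain.items() if len(peers) >= min_peers}

reports = [
    Report("evil.example", "peer-a", 1000),
    Report("evil.example", "peer-b", 1500),
    Report("stale.example", "peer-a", 0),
]
print(sorted(merge_blacklists(reports, now=87_000, min_peers=2)))
```

Requiring `min_peers` independent reporters is one simple way to limit the damage a single malicious peer could do by injecting false entries.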

    Effectiveness of antenatal corticosteroids at term: Can we trust the data that 'inform' us?

    Randomized controlled trials (RCTs) are a cornerstone for the assessment of the effectiveness of interventions. Appropriate randomization, design, sample size, statistical analyses, and conduct that reduces the risk of bias enhance the chance that they will deliver true research findings. The credibility of RCTs is difficult to assess without objective evidence of compliance with Good Clinical Practice standards. Remarkably, no mechanisms are in place, either during initial peer review or during meta-analysis, to assess this, and little guidance exists on how to assess data whose research integrity cannot be confirmed (e.g. where data originated from a setting without established infrastructure or from an era preceding current standards). We describe the case of the use of antenatal corticosteroids. When these drugs are used in early preterm birth, their benefits outweigh the harms. Later in pregnancy, however, and specifically at term, this balance is less clear. We show that the four randomized controlled trials that inform clinical practice through the Cochrane meta-analysis lack, for various reasons, clear governance, which makes it difficult to verify the provenance and reliability of their data. We conclude that transparency and assessment of data credibility need to be built in both at the time of publication and at the time of meta-analysis. This will drive up standards and encourage appropriate interpretation of results and of the context from which they were derived. Ben W. Mol, Wentao Li, Shimona Lai, Sarah Stoc

    HTML Violations and Where to Find Them: A Longitudinal Analysis of Specification Violations in HTML

    With the increased interest in the web in the 1990s, everyone wanted to have their own website. Given the general lack of knowledge, however, such pages contained numerous HTML specification violations. This was when browser vendors came up with a new feature – error tolerance. This feature, part of browsers ever since, makes HTML parsers tolerate violations and silently repair them at parse time. On the downside, it risks security issues like Mutation XSS and Dangling Markup. In this paper, we ask ourselves: do we still need to rely on this error tolerance, or can we abandon this security risk? To answer this question, we study the evolution of HTML violations over the past eight years. To this end, we identify security-relevant violations and leverage Common Crawl to check archived pages for them. Using this framework, we automatically analyze over 23K popular domains over time. This analysis reveals that while the number of violations has decreased over the years, more than 68% of all domains still contain at least one HTML violation today. While this number is obviously too high for browser vendors to tighten the parsing process immediately, we show that automatic approaches could quickly correct up to 46% of today's violations. Based on our findings, we propose a roadmap for how the parsing process could be tightened to improve the quality of HTML markup in the long run.
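One class of violation the paper is concerned with, mis-nested or unclosed tags that browsers' error tolerance silently repairs, can be detected with a simple stack-based checker. This sketch is our own illustration, not the paper's framework:

```python
# Illustrative checker for one class of security-relevant HTML violation:
# mis-nested or unclosed tags. Browsers silently repair these; a strict
# parser would instead report them as specification violations.
from html.parser import HTMLParser

VOID = {"br", "img", "meta", "link", "input", "hr"}  # no closing tag needed

class ViolationChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.violations = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.violations.append(f"unexpected </{tag}>")

    def close(self):
        super().close()
        self.violations += [f"unclosed <{t}>" for t in self.stack]

checker = ViolationChecker()
checker.feed("<div><b>bold<i>both</b></i></div>")  # mis-nested <i>…</b>
checker.close()
print(checker.violations)
```

A real browser would quietly reopen and reorder these elements, which is exactly the tolerance behavior that enables Mutation-XSS-style attacks.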

    A Tale of Two Headers: A Formal Analysis of Inconsistent Click-Jacking Protection on the Web

    Click-jacking protection on the modern Web is commonly enforced via client-side security mechanisms for framing control, like the X-Frame-Options header (XFO) and Content Security Policy (CSP). Though these client-side security mechanisms are certainly useful and successful, delegating protection to web browsers leaves room for inconsistencies in the security guarantees offered to users of different browsers. In particular, inconsistencies might arise due to the lack of support for CSP and the differing implementations of the underspecified XFO header. In this paper, we formally study the problem of inconsistencies in framing control policies across different browsers and implement an automated policy analyzer based on our theory, which we use to assess the state of click-jacking protection on the Web. Our analysis shows that 10% of the (distinct) framing control policies in the wild are inconsistent, and most often they do not provide any level of protection to at least one browser. We thus propose recommendations for web developers and browser vendors to mitigate this issue. Finally, we design and implement a server-side proxy to retrofit security in web applications.
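The kind of inconsistency the paper formalizes can be sketched for simple policies: a browser without CSP support falls back to XFO, so a site is inconsistent when the two headers answer differently for the same framer. Function names and the simplified semantics below are ours, not the paper's analyzer:

```python
# Illustrative consistency check between an XFO header and a CSP
# frame-ancestors directive, covering only the simple source forms.
def xfo_allows(xfo, framer, origin):
    v = xfo.strip().upper()
    if v == "DENY":
        return False
    if v == "SAMEORIGIN":
        return framer == origin
    return True  # unknown values are ignored, i.e. framing is allowed

def csp_allows(frame_ancestors, framer, origin):
    sources = frame_ancestors.split()
    if "'none'" in sources:
        return False
    return framer == origin if "'self'" in sources else framer in sources

def consistent(xfo, frame_ancestors, framer, origin):
    """True if both mechanisms give the same answer for this framer."""
    return xfo_allows(xfo, framer, origin) == csp_allows(
        frame_ancestors, framer, origin)

# SAMEORIGIN vs 'none': an XFO-only browser allows same-origin framing,
# a CSP-capable browser blocks it -> the policy pair is inconsistent.
print(consistent("SAMEORIGIN", "'none'",
                 framer="https://a.example", origin="https://a.example"))
```

Real policies also involve `ALLOW-FROM`, wildcard sources, and scheme matching, which is where the underspecification of XFO makes browsers diverge further.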

    Extended Hell(o): A Comprehensive Large-Scale Study on Email Confidentiality and Integrity Mechanisms in the Wild

    The core specifications of electronic mail as used today date back as early as the 1970s. At that time, security did not play a significant role in the development of communication protocols. These shortcomings still manifest themselves today in the prevalence of phishing and the reliance on opportunistic encryption. Besides STARTTLS, various mechanisms such as SPF, DKIM, DMARC, DANE, and MTA-STS have been proposed. However, related work has shown that not all providers support them and that misconfigurations are common. In this work, we provide a comprehensive overview of the current state of email confidentiality and integrity measures, as well as the effectiveness of their deployment. On a positive note, support for incoming TLS connections has increased significantly over the years, with over 96% of reachable MXs in the top 10 million domains allowing for explicit TLS. Notably, though, 30% of the presented certificates are invalid, with the majority of issues related to the presented hostnames. In light of this, all 47 providers we tested connect to hosts with expired, self-signed, or non-matching certificates, making it trivial for attackers to intercept their connections. Our analysis also shows that only around 40% of sites specify SPF, and even high-ranked providers like t-online.de do not enforce it. Similarly, while DNS lookups are performed for both DKIM and DANE, neither mechanism is validated or enforced by all providers. In addition, we show that MTA-STS is only slowly gaining traction (six providers support it) and provide the first large-scale analysis of OPENPGPKEY and SMIMEA records. All in all, this still paints a grim yet slightly improving picture of the state of email security as of late 2022.
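To make concrete what "specifying SPF" means for the roughly 40% of sites that do: a receiver fetches the domain's SPF TXT record and evaluates the sender's IP against it. A minimal sketch covering only the `ip4`/`ip6`/`all` mechanisms; real SPF (RFC 7208) also handles `include:`, `redirect=`, and macros, and the record and IPs below are made up:

```python
# Simplified SPF evaluation: match the sending IP against the
# ip4/ip6/all mechanisms of a record and return the qualifier.
import ipaddress

def check_spf(record, sender_ip):
    """Return 'pass', 'fail', 'softfail', or 'neutral' for sender_ip."""
    qualifiers = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:            # skip the "v=spf1" tag
        q = qualifiers.get(term[0])
        mech = term[1:] if q else term         # default qualifier is "+"
        q = q or "pass"
        if mech == "all":
            return q
        if mech.startswith(("ip4:", "ip6:")):
            if ip in ipaddress.ip_network(mech[4:], strict=False):
                return q
    return "neutral"                           # no mechanism matched

record = "v=spf1 ip4:192.0.2.0/24 -all"
print(check_spf(record, "192.0.2.10"))    # pass
print(check_spf(record, "198.51.100.1"))  # fail
```

The paper's observation that providers often look up SPF but do not enforce it corresponds to computing this result and then delivering the mail regardless of a `fail`.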

    Kizzle: A Signature Compiler for Detecting Exploit Kits

    In recent years, the drive-by malware space has undergone significant consolidation. Today, the most common source of drive-by downloads are so-called exploit kits (EKs). This paper presents Kizzle, the first prevention technique specifically designed for finding exploit kits. Our analysis shows that while the JavaScript delivered by kits varies greatly, the unpacked code varies much less, due to the kit authors' code reuse between versions. Ironically, this well-regarded software engineering practice allows us to build a scalable and precise detector that is able to quickly respond to superficial but frequent changes in EKs. Kizzle is able to generate anti-virus signatures for detecting EKs that compare favorably to manually created ones. Kizzle is highly responsive and can generate new signatures within hours. Our experiments show that Kizzle produces high-accuracy signatures. When evaluated over a four-week period, false-positive rates for Kizzle are under 0.03%, while false-negative rates are under 5%.
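The key observation, that unpacked kit code stays stable while the packed delivery varies, suggests mining invariant substrings across unpacked samples as signatures. A hypothetical toy version of that idea (Kizzle's actual signature compiler is considerably more sophisticated):

```python
# Toy signature mining: reduce a set of unpacked samples to their
# longest shared substring, usable as a signature if long enough.
from difflib import SequenceMatcher
from functools import reduce

def common_signature(samples, min_len=8):
    """Fold samples with pairwise longest-common-substring; return the
    result only if it is long enough to be a meaningful signature."""
    def lcs(a, b):
        m = SequenceMatcher(None, a, b).find_longest_match(
            0, len(a), 0, len(b))
        return a[m.a:m.a + m.size]
    sig = reduce(lcs, samples)
    return sig if len(sig) >= min_len else None

samples = [  # made-up "unpacked" snippets sharing a code skeleton
    "var a=unpack(p1);eval(decode(a));run(a);",
    "var b=unpack(p2);eval(decode(b));go(b);",
    "var c=unpack(p3);eval(decode(c));fire(c);",
]
print(common_signature(samples))
```

Because the shared fragment survives superficial renames between kit versions, a signature mined this way keeps matching until the authors rewrite the underlying code, which is the responsiveness property the paper measures.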

    Eradicating DNS Rebinding with the Extended Same-Origin Policy

    The Web’s principal security policy is the Same-Origin Policy (SOP), which enforces origin-based isolation of mutually distrusting Web applications. Since the early days, the SOP has repeatedly been undermined by variants of the DNS Rebinding attack, which allows untrusted script code to gain illegitimate access to protected network resources. To counter these attacks, browser vendors introduced countermeasures, such as DNS Pinning. In this paper, we present a novel DNS Rebinding attack method leveraging the HTML5 Application Cache. Our attack allows reliable DNS Rebinding attacks that circumvent all currently deployed browser-based defense measures. Furthermore, we analyze the fundamental problem that allows DNS Rebinding to work in the first place: the SOP’s main purpose is to enforce the security boundaries of Web servers, yet the Web servers themselves are only indirectly involved in the corresponding security decision. Instead, the SOP relies on information obtained from the domain name system, which is not necessarily controlled by the Web server’s owners. This mismatch is what DNS Rebinding exploits. Based on this insight, we propose a lightweight extension to the SOP which takes Web-server-provided information into account. We successfully implemented our extended SOP for the Chromium Web browser and report on our implementation’s interoperability and security properties.
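The proposed fix can be sketched as a policy decision: instead of trusting the DNS-derived origin alone, the browser also requires the server behind the resolved IP to vouch for that origin (for example via a response header). This is our own simplification of the paper's extended SOP, with invented names:

```python
# Sketch of an extended SOP check: access is granted only when the
# server reached via DNS explicitly lists the requesting origin.
def extended_sop_allows(script_origin, dns_resolved_ip,
                        server_claimed_origins):
    """Classic SOP compares origins derived from DNS names; after a
    rebinding attack the attacker's name resolves to a victim IP. The
    extension also requires the server at that IP to list the origin."""
    return script_origin in server_claimed_origins.get(dns_resolved_ip,
                                                       set())

# attacker.example was rebound to the intranet server 10.0.0.5, but that
# server only vouches for its own origin, so the access is denied.
claims = {"10.0.0.5": {"https://intranet.example"}}
print(extended_sop_allows("https://attacker.example", "10.0.0.5",
                          claims))   # False: rebinding blocked
print(extended_sop_allows("https://intranet.example", "10.0.0.5",
                          claims))   # True: legitimate access
```

The point of the design is that the decision now incorporates information the server's owner controls, closing the gap that DNS Rebinding exploits.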

    JStap: A Static Pre-Filter for Malicious JavaScript Detection

    Given the success of the Web platform, attackers have abused its main programming language, namely JavaScript, to mount different types of attacks on their victims. Due to the large volume of such malicious scripts, detection systems rely on static analyses to quickly process the vast majority of samples. These static approaches are not infallible, though, and lead to misclassifications. They also lack the semantic information needed to go beyond purely syntactic approaches. In this paper, we propose JStap, a modular static JavaScript detection system which extends the detection capability of existing lexical and AST-based pipelines by also leveraging control- and data-flow information. Our detector is composed of ten modules, combining five different ways of abstracting code, with differing levels of context and semantic information, and two ways of extracting features. Based on the frequency of these specific patterns, we train a random forest classifier for each module. In practice, JStap outperforms existing systems, which we reimplemented and tested on our dataset totaling over 270,000 samples. To improve detection further, we also combine the predictions of several modules. A first layer of unanimous voting classifies 93% of our dataset with an accuracy of 99.73%, while a second layer, based on an alternative combination of modules, labels another 6.5% of our initial dataset with an accuracy over 99%. This way, JStap can be used as a precise pre-filter, meaning that it would only need to forward less than 1% of samples to additional analyses. For reproducibility and direct deployability of our modules, we make our system publicly available (https://github.com/Aurore54F/JStap).
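The AST-based abstraction underlying such pipelines can be illustrated by analogy. JStap analyzes JavaScript, but the same feature-extraction idea, turning code into a sequence of AST node types and counting n-grams, is easy to show with Python's own `ast` module:

```python
# Analogy only: abstract source code into AST node-type n-grams, the
# kind of frequency features a per-module classifier is trained on.
import ast
from collections import Counter

def node_type_ngrams(source, n=2):
    """Sequence of AST node-type names (in ast.walk order), reduced to
    n-gram counts usable as classifier features."""
    types = [type(node).__name__ for node in ast.walk(ast.parse(source))]
    return Counter(tuple(types[i:i + n]) for i in range(len(types) - n + 1))

feats = node_type_ngrams("x = eval(input())")
print(feats.most_common(3))
```

Abstracting identifiers away into node types is what lets such features survive the renaming and repacking that defeats purely lexical signatures; JStap's flow-based modules go further by tracking which values reach which calls.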

    The Leaky Web: Automated Discovery of Cross-Site Information Leaks in Browsers and the Web

    When browsing the web, none of us want sites to infer which other sites we may have visited before or are logged in to. However, attacker-controlled sites may infer this state through browser side channels dubbed Cross-Site Leaks (XS-Leaks). Although these issues have been known since the 2000s, prior reports mostly found individual instances of issues rather than systematically studying the problem space. Further, the actual impact in the wild often remained opaque. To address these open problems, we develop the first automated framework to systematically discover observation channels in browsers. In doing so, we detect and characterize 280 observation channels that leak information cross-site in the engines of Chromium, Firefox, and Safari, including many variations of supposedly fixed leaks. Atop this framework, we create an automatic pipeline to find XS-Leaks in real-world websites. With this pipeline, we conduct the largest study to date on XS-Leak prevalence in the wild, performing a visit-inference attack and a newly proposed cookie-acceptance inference attack on the Tranco Top10K. In addition, we test 100 websites for the classic XS-Leak attack vector of login detection. Our results show that XS-Leaks pose a significant threat to the web ecosystem, as at least 15%, 34%, and 77% of all tested sites are vulnerable to the three attacks, respectively. We also uncover substantial implementation differences between the browsers, resulting in differing attack surfaces that matter in the wild. To ensure that browser vendors and web developers alike can check their applications for XS-Leaks, we open-source our framework and include an extensive discussion of countermeasures to get rid of XS-Leaks in the near future and to ensure that new browser features do not introduce new XS-Leaks.
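The classic login-detection XS-Leak mentioned above can be reduced to a single observable bit. A simplified simulation of that bit (not the paper's framework; the status codes are illustrative):

```python
# Simulation of a login-detection XS-Leak: a cross-site page cannot
# read a response body, but it can observe whether an embedded,
# authentication-protected resource (e.g. an avatar image) fires
# onload or onerror - and that bit differs with login state.
def probe(resource_status):
    """Model the only bit a cross-site attacker observes: load success."""
    return "onload" if resource_status == 200 else "onerror"

def infer_logged_in(status_when_probed):
    # Assumed site behavior: the protected avatar returns 200 for
    # logged-in users and an error/redirect-to-login otherwise.
    return probe(status_when_probed) == "onload"

print(infer_logged_in(200))  # True  -> user appears logged in
print(infer_logged_in(403))  # False -> user appears logged out
```

The 280 channels the paper catalogs are, in essence, 280 different ways of observing such a bit; countermeasures like cross-origin resource policies work by making the observable identical in both states.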